10 research outputs found

    The medical pause in simulation training

    The medical pause, briefly stopping task performance to engage in additional cognitive activities, can potentially advance patient safety and learning in medicine. Yet, to date, we lack a theoretical understanding of why pausing should be taught as a professional skill, as well as empirical evidence of how pausing affects performance and learning. To address this gap, this thesis investigates the effects of pausing in medical training both theoretically and empirically. For the empirical investigation, a computer-based simulation was used as the task environment, and eye-tracking and log data were used to assess performance.

    How prior knowledge affects problem-solving performance in a medical simulation game: Using game-logs and eye-tracking

    Computer-based simulation games provide an environment to train complex problem-solving skills. Yet, it is largely unknown how the in-game performance of learners varies with different levels of prior knowledge. Based on theories of complex-skill acquisition (e.g., 4C/ID), we derive four performance aspects that prior knowledge may affect: (1) systematicity in approach, (2) accuracy in visual attention and motor reactions, (3) speed in performance, and (4) cognitive load. This study aims to empirically test whether prior knowledge affects these four aspects of performance in a medical simulation game for resuscitation skills training. Participants were 24 medical professionals (experts, with high prior knowledge) and 22 medical students (novices, with low prior knowledge). After pre-training, they all played one scenario, during which game-logs and eye-movements were collected. A cognitive-load questionnaire ensued. During gameplay, experts demonstrated a more systematic approach, higher accuracy in visual selection and motor reaction, and a higher performance speed than novices. Their reported levels of cognitive load were lower. These results indicate that prior knowledge has a substantial impact on performance in simulation games, opening up the possibility of using our measures for performance assessment.
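    Two of the four performance aspects above (systematicity and speed) can be illustrated as simple log-derived measures. This is an illustrative sketch only: the event names, the reference protocol order, and the specific metrics are hypothetical, not taken from the study.

    ```python
    # Hypothetical sketch: deriving systematicity and speed from a simplified
    # game log of (timestamp, action) pairs. Action names and the reference
    # protocol order are illustrative, not from the study.
    from difflib import SequenceMatcher

    REFERENCE_ORDER = ["check_response", "call_help", "open_airway",
                       "check_breathing", "start_compressions"]

    def systematicity(actions):
        """Similarity (0..1) between the performed action sequence and the
        reference protocol order, via matching subsequences."""
        return SequenceMatcher(None, actions, REFERENCE_ORDER).ratio()

    def speed(log):
        """Actions per second over the scenario, from (timestamp, action) pairs."""
        if len(log) < 2:
            return 0.0
        duration = log[-1][0] - log[0][0]
        return len(log) / duration if duration > 0 else 0.0

    # Example: a learner who skips one protocol step
    log = [(0.0, "check_response"), (3.5, "open_airway"),
           (7.0, "check_breathing"), (12.0, "start_compressions")]
    actions = [a for _, a in log]
    sys_score = systematicity(actions)   # high, but below 1.0 (one step skipped)
    pace = speed(log)                    # actions per second
    ```

    In practice, such log-based measures would be combined with the eye-movement data mentioned in the abstract to cover the accuracy aspect as well.
    
    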

    Measuring Cognitive Load in Virtual Reality Training via Pupillometry

    Pupillometry is known as a reliable technique to measure cognitive load in learning and performance. However, its applicability to virtual reality (VR) environments, an emerging technology for simulation-based training, has not been well-verified in educational contexts. Specifically, the VR display causes light reflexes that confound task-evoked pupillary responses (TEPRs), impairing cognitive load measures. Through this pilot study, we validated whether task difficulty can predict cognitive load as measured by TEPRs corrected for the light reflex, and whether these TEPRs correlate with cognitive load self-ratings and performance. Fourteen students in health sciences performed observation tasks in two conditions, difficult versus easy, whilst watching a VR scenario in home health care. A cognitive load self-rating then ensued. We used a VR system with a built-in eye-tracker and an installed photosensor to assess pupil diameter and light intensity during the scenario. Employing a method from the human-computer interaction field, we determined TEPRs by modeling the pupil light reflexes using a baseline. As predicted, the difficult task caused significantly larger TEPRs than the easy task. Only in the difficult task condition did TEPRs positively correlate with the performance measure. These results suggest that TEPRs are valid measures of cognitive load in VR training when corrected for the light reflex. This opens up possibilities to use real-time cognitive load for assessment and instructional design in VR training. Future studies should test our findings with a larger sample size, in various domains, involving complex VR functions such as haptic interaction.
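    The correction idea described above can be sketched as follows. This is an assumption-laden illustration, not the study's actual method: it fits a simple linear light-reflex model on baseline data and treats task-phase residuals as the cognitive (task-evoked) component.

    ```python
    # Illustrative sketch (not the study's exact method): correct TEPRs for
    # the pupil light reflex by regressing baseline pupil diameter on display
    # light intensity, then taking task-phase residuals as the TEPR.
    import statistics

    def fit_light_model(light, pupil):
        """Ordinary least squares fit of pupil = a + b * light on baseline data."""
        mx, my = statistics.fmean(light), statistics.fmean(pupil)
        b = (sum((x - mx) * (y - my) for x, y in zip(light, pupil))
             / sum((x - mx) ** 2 for x in light))
        return my - b * mx, b  # intercept a, slope b

    def tepr(task_light, task_pupil, a, b):
        """Mean residual pupil dilation after removing the predicted light reflex."""
        return statistics.fmean(p - (a + b * l)
                                for l, p in zip(task_light, task_pupil))

    # Baseline: pupil shrinks as light increases (slope should be negative)
    a, b = fit_light_model([1, 2, 3, 4], [4.5, 4.0, 3.5, 3.0])
    # Task phase: pupil is dilated beyond what the light model predicts
    load = tepr([2, 3], [4.3, 3.8], a, b)  # positive residual = extra dilation
    ```

    A real pipeline would work on time-aligned samples from the eye-tracker and photosensor, but the core idea, separating light-driven from task-evoked pupil changes, is the same.
    
    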

    The Validity of Physiological Measures to Identify Differences in Intrinsic Cognitive Load

    A sample of 33 experiments was extracted from the Web-of-Science database over a 5-year period (2016–2020) that used physiological measures to measure intrinsic cognitive load. Only studies that required participants to solve tasks of varying complexities using a within-subjects design were included. The sample identified a number of different physiological measures obtained by recording signals from four main body categories (heart and lungs, eyes, skin, and brain), as well as subjective measures. The overall validity of the measures was assessed by examining construct validity and sensitivity. It was found that the vast majority of physiological measures had some level of validity, but varied considerably in sensitivity to detect subtle changes in intrinsic cognitive load. Validity was also influenced by the type of task. Eye measures were found to be the most sensitive, followed by the heart and lungs, skin, and brain. However, subjective measures had the highest levels of validity. It is concluded that a combination of physiological and subjective measures is most effective in detecting changes in intrinsic cognitive load.

    Different effects of pausing on cognitive load in a medical simulation game

    In medical training, allowing learners to take pauses during tasks is known to enhance performance. Cognitive load theory assumes that the insertion of pauses positively affects cognitive load, thereby enhancing performance. However, empirical studies on how allowing and taking pauses affects cognitive load and performance in dynamic task environments are scarce. We investigated the pause effect using a computerized simulation game in emergency medicine. Medical students (N = 70) were randomly assigned to one of two conditions: simulation with (n = 40) and without (n = 30) the option to take pauses. All participants played the same two scenarios, during which game logs and eye-tracking data were recorded. Overall, both cognitive load and performance were higher in the condition with pauses than in the one without. The act of pausing, however, temporarily lowered cognitive load, especially during intense moments. Two different manifestations of the pause effect were identified: (1) by stimulating additional cognitive and meta-cognitive processes, pauses increased overall cognitive load; and (2) through relaxation, the act of pausing temporarily decreased heightened cognitive load. Consequently, our results suggest that in order to enhance students’ performance and learning, it is important that we encourage them to utilize the different effects of pausing.
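    The within-scenario comparison underlying the second manifestation, load dipping during pauses relative to active play, can be sketched with hypothetical data structures. The function and data shapes below are illustrative assumptions, not the study's analysis code.

    ```python
    # Hypothetical sketch: compare a per-sample cognitive load signal (e.g.,
    # pupil-based) during logged pauses versus active gameplay. Data shapes
    # are illustrative, not from the study.
    from statistics import fmean

    def split_load(load, pauses):
        """load: list of (t, value) samples; pauses: list of (start, end)
        intervals. Returns (mean load during pauses, mean load during play)."""
        in_pause = [v for t, v in load if any(s <= t < e for s, e in pauses)]
        active = [v for t, v in load if not any(s <= t < e for s, e in pauses)]
        return fmean(in_pause), fmean(active)

    # Example: load drops while the scenario is paused between t=2 and t=4
    load = [(0, 2.0), (1, 2.0), (2, 1.0), (3, 1.0), (4, 2.0)]
    paused_mean, active_mean = split_load(load, [(2, 4)])
    ```

    A per-pause version of the same comparison (load just before, during, and just after each pause) would capture the "temporary" aspect reported in the abstract.
    
    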

    Eye tracking: empirical foundations for a minimal reporting guideline

    In this paper, we present a review of how the various aspects of any study using an eye tracker (such as the instrument, methodology, environment, participant, etc.) affect the quality of the recorded eye-tracking data and the obtained eye-movement and gaze measures. We take this review to represent the empirical foundation for reporting guidelines of any study involving an eye tracker. We compare this empirical foundation to five existing reporting guidelines and to a database of 207 published eye-tracking studies. We find that reporting guidelines vary substantially and do not match actual reporting practices. We end by deriving a minimal, flexible reporting guideline based on empirical research (Section 6).

    Retraction Note: Eye tracking: empirical foundations for a minimal reporting guideline

    The authors have retracted this article because a number of statements are supported by two references, Holmqvist (2015) and Holmqvist (2016), which should not have been used. Saga Lee Örbom, Ignace T. C. Hooge, Diederick C. Niehorster, Robert G. Alexander, Richard Andersson, Jeroen S. Benjamins, Pieter Blignaut, Anne-Marie Brouwer, Lewis L. Chuang, Kirsten A. Dalrymple, Denis Drieghe, Matt J. Dunn, Ulrich Ettinger, Susann Fiedler, Tom Foulsham, Jos N. van der Geest, Dan Witzner Hansen, Samuel B. Hutton, Enkelejda Kasneci, Alan Kingstone, Paul C. Knox, Ellen M. Kok, Helena Lee, Joy Yeonjoo Lee, Jukka M. Leppänen, Stephen Macknik, Päivi Majaranta, Susana Martinez-Conde, Antje Nuthmann, Marcus Nyström, Jacob L. Orquin, Jorge Otero-Millan, Soon Young Park, Stanislav Popelka, Frank Proudlock, Frank Renkewitz, Austin Roorda, Michael Schulte-Mecklenbeck, Bonita Sharif, Frederick Shic, Mark Shovman, Mervyn G. Thomas, Ward Venrooij, Raimondas Zemblys and Roy S. Hessels agree with this retraction. Kenneth Holmqvist is deceased.